Leveraging Learners by Gradient
Author
Abstract
Recent interpretations of the AdaBoost algorithm view it as performing gradient descent on a potential function. Simply changing the potential function allows one to create new algorithms related to AdaBoost. However, these new algorithms are generally not known to have the formal boosting property. This paper examines the question of which potential functions lead to new algorithms that are boosters. The two main results are general sets of conditions on the potential; one set implies that the resulting algorithm is a booster, while the other implies that the algorithm is not. These conditions are applied to previously studied potential functions, such as those used by LogitBoost and Doom II.
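The gradient-descent view described above can be sketched in code. The following is a minimal illustration, not the paper's algorithm: each round weights examples by the negative derivative of a pluggable potential applied to the current margins, fits a weak learner (here a hypothetical decision stump) to the weighted sample, and takes a fixed step in function space. The potentials shown, `exp(-z)` for AdaBoost and `log(1 + exp(-z))` for LogitBoost, are the standard choices; the step size, stump learner, and function names are assumptions for the sketch.

```python
import numpy as np

def exp_potential_grad(z):
    # derivative of the AdaBoost potential phi(z) = exp(-z)
    return -np.exp(-z)

def logistic_potential_grad(z):
    # derivative of the LogitBoost potential phi(z) = log(1 + exp(-z))
    return -1.0 / (1.0 + np.exp(z))

def train_stump(X, y, w):
    """Weighted decision stump: pick the (feature, threshold, sign)
    minimizing weighted error on labels y in {-1, +1}."""
    best = (np.inf, 0, 0.0, 1)
    for j in range(X.shape[1]):
        for t in np.unique(X[:, j]):
            for s in (1, -1):
                pred = np.where(X[:, j] <= t, s, -s)
                err = np.sum(w[pred != y])
                if err < best[0]:
                    best = (err, j, t, s)
    _, j, t, s = best
    return lambda Z: np.where(Z[:, j] <= t, s, -s)

def leverage(X, y, potential_grad, rounds=10, step=0.5):
    """Gradient-descent leveraging with a pluggable potential."""
    F = np.zeros(len(y))  # master hypothesis scores
    hs = []
    for _ in range(rounds):
        w = -potential_grad(y * F)  # example weights = -phi'(margin)
        w = w / w.sum()
        h = train_stump(X, y, w)
        F = F + step * h(X)         # step in the direction of h
        hs.append(h)
    return lambda Z: np.sign(sum(step * h(Z) for h in hs))
```

Swapping `exp_potential_grad` for `logistic_potential_grad` changes only the reweighting rule, which is exactly the sense in which "simply changing the potential function" yields a new AdaBoost-like algorithm; whether the result is a formal booster is the question the paper addresses.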
Similar papers
Leveraging for Regression
In this paper we examine master regression algorithms that leverage base regressors by iteratively calling them on modified samples. The most successful leveraging algorithm for classification is AdaBoost, an algorithm that requires only modest assumptions on the base learning method for its good theoretical bounds. We present three gradient descent leveraging algorithms for regression and prov...
Full text
A Geometric Approach to Leveraging
AdaBoost is a popular and effective leveraging procedure for improving the hypotheses generated by weak learning algorithms. AdaBoost and many other leveraging algorithms can be viewed as performing a constrained gradient descent over a potential function. At each iteration the distribution over the sample given to the weak learner is the direction of steepest descent. We introduce a new leverag...
Full text
Inefficiency of Stochastic Gradient Descent with Larger Mini-Batches (and More Learners)
Stochastic Gradient Descent (SGD) and its variants are the most important optimization algorithms used in large-scale machine learning. The mini-batch version of stochastic gradient descent is often used in practice to take advantage of hardware parallelism. In this work, we analyze the effect of mini-batch size on SGD convergence for the case of general non-convex objective functions. Building on the...
Full text
Leveraging Engagement and Participation in e-Learning with Trust
This article describes a project that builds on the authors' previous body of knowledge on Trust and uses it to leverage higher levels of engagement in e-learning contexts. The presented research aims to investigate unobtrusive strategies for evaluating a toolset of Trust indicators that monitor trust levels, thus facilitating the deployment of trust-level regulation interventions. So far, results ...
Full text